
    Automated disease classification in (Selected) agricultural crops using transfer learning

    The biotic stress of agricultural crops is a major concern across the globe. Its effects are felt most acutely in economically poor countries, where advanced facilities for disease diagnosis are limited and awareness among farmers is lacking. The recent revolution in smartphone technology and deep learning techniques has created an opportunity for automated disease classification. In this study, images acquired through a smartphone are transmitted to a personal computer via a wireless Local Area Network (LAN) for classification of ten different diseases using transfer learning in four major but little-explored agricultural crops. Six pre-trained Convolutional Neural Networks (CNNs) were used, namely AlexNet, Visual Geometry Group 16 (VGG16), VGG19, GoogLeNet, ResNet101 and DenseNet201, and their results are compared. GoogLeNet achieved the best validation accuracy of 97.3%. Misclassification was mainly due to Tobacco Mosaic Virus (TMV) and the two-spotted spider mite. Under test conditions, images were classified in real time and prediction scores were evaluated for each disease class. All models showed a reduction in accuracy, with VGG16 achieving the best test accuracy of 90%. Various factors contributing to the reduction in accuracy and the future scope for improvement are elucidated.
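    A minimal transfer-learning sketch of the kind of setup the abstract describes (the authors' actual code and hyperparameters are not given, so the layer freezing and the ten-class head below are illustrative assumptions), using an ImageNet-pretrained GoogLeNet from torchvision:

    import torch.nn as nn
    from torchvision import models

    # Load GoogLeNet with ImageNet weights and freeze the pretrained feature extractor.
    model = models.googlenet(weights="IMAGENET1K_V1")
    for p in model.parameters():
        p.requires_grad = False

    # Replace the final fully connected layer with a new head for the ten disease
    # classes; only this layer is trained on the smartphone-acquired crop images.
    model.fc = nn.Linear(model.fc.in_features, 10)

    The same pattern applies to the other backbones (AlexNet, VGG16/19, ResNet101, DenseNet201) by swapping in the corresponding torchvision model and its classification layer.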

    BioSpec: A Biophysically-Based Spectral Model of Light Interaction with Human Skin

    Despite the notable progress in physically-based rendering, there is still a long way to go before we can automatically generate predictable images of biological materials. In this thesis, we address an open problem in this area, namely the spectral simulation of light interaction with human skin, and propose a novel biophysically-based model that accounts for all components of light propagation in skin tissues, namely surface reflectance, subsurface reflectance and transmittance, and the biological mechanisms of light absorption by pigments in these tissues. The model is controlled by biologically meaningful parameters, and its formulation, based on standard Monte Carlo techniques, enables its straightforward incorporation into realistic image synthesis frameworks. Besides its biophysically-based nature, the key difference between the proposed model and existing skin models is its comprehensiveness, i.e., it computes both spectral (reflectance and transmittance) and scattering (bidirectional surface-scattering distribution function) quantities for skin specimens. In order to assess the predictability of our simulations, we evaluate their accuracy by comparing results from the model with measured skin data. We also present computer-generated images to illustrate the flexibility of the proposed model with respect to variations in the biological input data, and its applicability not only to the predictive image synthesis of different skin tones, but also to the spectral simulation of medical conditions.
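    As a loose illustration of the Monte Carlo machinery such models build on (a heavily simplified one-dimensional toy, not the BioSpec formulation; all coefficients are placeholders), a photon random walk through a single absorbing and scattering layer already reproduces the qualitative link between pigment absorption and reflectance/transmittance:

    import numpy as np

    def simulate_slab(mu_a, mu_s, thickness, n_photons=20000, seed=1):
        """1-D Monte Carlo walk of photons through one absorbing/scattering layer."""
        rng = np.random.default_rng(seed)
        mu_t = mu_a + mu_s
        reflected = transmitted = 0
        for _ in range(n_photons):
            z, direction = 0.0, 1.0                 # photon enters travelling into the slab
            while True:
                z += direction * (-np.log(1.0 - rng.random()) / mu_t)  # sample free path
                if z <= 0.0:
                    reflected += 1
                    break
                if z >= thickness:
                    transmitted += 1
                    break
                if rng.random() < mu_a / mu_t:      # interaction is an absorption event
                    break
                direction = 1.0 if rng.random() < 0.5 else -1.0        # isotropic 1-D scatter
        return reflected / n_photons, transmitted / n_photons

    # A larger absorption coefficient (e.g. more melanin at a given wavelength)
    # lowers both reflectance and transmittance.
    R, T = simulate_slab(mu_a=1.0, mu_s=10.0, thickness=0.1)
    print(f"reflectance ~ {R:.3f}, transmittance ~ {T:.3f}")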

    An Introduction to Light Interaction with Human Skin

    Despite the notable progress in physically-based rendering, there is still a long way to go before one can automatically generate predictable images of organic materials such as human skin. In this tutorial, the main physical and biological aspects involved in the processes of propagation and absorption of light by skin tissues are examined. These processes affect not only skin appearance, but also its health. For this reason, they have also been the object of study in biomedical research. The models of light interaction with human skin developed by the biomedical community are mainly aimed at the simulation of skin spectral properties, which are used to determine the concentration and distribution of various substances. In computer graphics, the focus has been on the simulation of light scattering properties that affect skin appearance. Computer models used to simulate these spectral and scattering properties are described in this tutorial, and their strengths and limitations discussed.
    Keywords: natural phenomena, biologically and physically-based rendering
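    A toy numerical illustration of the point about recovering substance concentrations from spectra (not taken from the tutorial; the absorptivity and path length are made-up placeholders): under the Beer-Lambert law, absorbance grows linearly with pigment concentration, which is what lets spectral measurements be inverted for concentration estimates.

    epsilon = 2.5e3     # hypothetical molar absorptivity [L / (mol cm)]
    path = 0.01         # hypothetical optical path length through a tissue layer [cm]

    for c in (1e-4, 2e-4, 4e-4):           # candidate pigment concentrations [mol/L]
        A = epsilon * c * path              # Beer-Lambert: A = epsilon * c * l
        T = 10.0 ** (-A)                    # corresponding fraction of light transmitted
        print(f"c = {c:.0e} mol/L   absorbance = {A:.3f}   transmittance = {T:.3f}")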

    A fused lightweight CNN model for the diagnosis of COVID-19 using CT scan images

    Computed tomography is an effective tool for the fast diagnosis of COVID-19. However, in high case-load scenarios, manual interpretation of the scan images by an expert is prone to delay and human error. An artificial intelligence (AI) based automated tool can be employed for fast and efficient diagnosis of this disease. For image-based diagnosis, convolutional neural networks (CNNs), a subcategory of AI, have been widely explored. However, these CNN models require significant computational resources for processing. Hence, in this work, the performance of two lightweight and relatively unexplored CNN models, SqueezeNet and ShuffleNet, was evaluated with CT scan images. While SqueezeNet produced an accuracy of 86.4%, ShuffleNet was able to provide an accuracy of 95.8%. To further improve the accuracy, a novel fused model combining these two networks was developed and evaluated. The fused model outperformed the two base models with an overall accuracy of 97%. Analysis of the confusion matrix revealed an improved specificity of 96.08% and precision of 96.15%, with a better fall-out and false discovery rate of 3.91% and 3.84%, respectively.
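    A hedged sketch of what a feature-level fusion of the two backbones could look like (the abstract does not spell out the fusion strategy, so the class name, pooling and single linear classifier below are illustrative assumptions), using the SqueezeNet 1.1 and ShuffleNet V2 implementations shipped with torchvision:

    import torch
    import torch.nn as nn
    from torchvision import models

    class FusedLightweightNet(nn.Module):          # hypothetical name, not the paper's
        def __init__(self, num_classes=2):
            super().__init__()
            squeeze = models.squeezenet1_1(weights="IMAGENET1K_V1")
            shuffle = models.shufflenet_v2_x1_0(weights="IMAGENET1K_V1")
            # Keep only the convolutional feature extractors of each backbone.
            self.squeeze_features = squeeze.features                  # -> (N, 512, h, w)
            self.shuffle_features = nn.Sequential(                    # -> (N, 1024, h, w)
                shuffle.conv1, shuffle.maxpool,
                shuffle.stage2, shuffle.stage3, shuffle.stage4, shuffle.conv5,
            )
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.classifier = nn.Linear(512 + 1024, num_classes)

        def forward(self, x):
            a = self.pool(self.squeeze_features(x)).flatten(1)
            b = self.pool(self.shuffle_features(x)).flatten(1)
            return self.classifier(torch.cat([a, b], dim=1))          # fused prediction

    model = FusedLightweightNet(num_classes=2)                        # COVID vs. non-COVID
    logits = model(torch.randn(4, 3, 224, 224))                       # dummy CT-slice batch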

    Task-based agricultural mobile robots in arable farming: A review

    In agriculture (in the context of this paper, the terms “agriculture” and “farming” refer only to the farming of crops and exclude the farming of animals), smart farming and automated agricultural technology have emerged as promising methodologies for increasing crop productivity without sacrificing produce quality. The emergence of various robotics technologies has facilitated the application of these techniques in agricultural processes. However, incorporating this technology into farms has proven challenging because of the large variations in shape, size, rate and type of growth, type of produce, and environmental requirements for different types of crops. Agricultural processes are chains of systematic, repetitive, and time-dependent tasks. However, some agricultural processes differ based on the type of farming, namely permanent crop farming and arable farming. Permanent crop farming includes permanent crops or woody plants such as orchards and vineyards, whereas arable farming includes temporary crops such as wheat and rice. Major operations in open arable farming include tilling, soil analysis, seeding, transplanting, crop scouting, pest control, weed removal and harvesting, and robots can assist in performing all of these tasks. Each specific operation requires auxiliary devices and sensors with specific functions. This article reviews the latest advances in the application of mobile robots to these agricultural operations in open arable farming and provides an overview of the systems and techniques that are used. It also discusses various challenges for future improvements in using reliable mobile robots for arable farming.

    Evaluation of the potential of Near Infrared Hyperspectral Imaging for monitoring the invasive brown marmorated stink bug

    The brown marmorated stink bug (BMSB), Halyomorpha halys, is an invasive insect pest of global importance that damages several crops, compromising agri-food production. Field monitoring procedures are fundamental to risk assessment, in order to promptly face crop infestations and avoid economic losses. To improve pest management, spectral cameras mounted on Unmanned Aerial Vehicles (UAVs) and other Internet of Things (IoT) devices, such as smart traps or unmanned ground vehicles, could be used as an innovative technology allowing fast, efficient and real-time monitoring of insect infestations. The present study consists of a preliminary laboratory-level evaluation of Near Infrared Hyperspectral Imaging (NIR-HSI) as a possible technology to detect BMSB specimens against different vegetal backgrounds, overcoming the problem of BMSB mimicry. Hyperspectral images of BMSB were acquired in the 980-1660 nm range, considering different vegetal backgrounds selected to mimic a real field application scene. Classification models were obtained following two different chemometric approaches. The first approach focused on modelling spectral information and selecting relevant spectral regions for discrimination by means of sparse-based variable selection coupled with the Soft Partial Least Squares Discriminant Analysis (s-Soft PLS-DA) classification algorithm. The second approach was based on modelling the spatial and spectral features contained in the hyperspectral images using Convolutional Neural Networks (CNNs). Finally, to further improve BMSB detection ability, the two strategies were merged, considering only the spectral regions selected by s-Soft PLS-DA for CNN modelling.
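    A rough stand-in for the spectral-modelling step (scikit-learn does not ship s-Soft PLS-DA, so plain PLS-DA, i.e. PLSRegression on binary class labels, is used here purely for illustration; the spectra and labels are simulated placeholders rather than real NIR-HSI data):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # Placeholder pixel spectra over ~200 bands in the 980-1660 nm range,
    # labelled 0 = vegetal background, 1 = BMSB specimen.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 200))
    y = rng.integers(0, 2, size=600)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    pls = PLSRegression(n_components=10)
    pls.fit(X_tr, y_tr)                                    # regress class label on spectra
    y_pred = (pls.predict(X_te).ravel() > 0.5).astype(int)
    print("pixel classification accuracy:", (y_pred == y_te).mean())

    # Bands can be ranked by the magnitude of their regression coefficients,
    # loosely mirroring the idea of selecting spectral regions before CNN modelling.
    top_bands = np.argsort(np.abs(pls.coef_.ravel()))[::-1][:20]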

    Disease classification in Solanum melongena using deep learning

    Aim of study: The application of pre-trained deep learning models, AlexNet and VGG16, for the classification of five diseases (Epilachna beetle infestation, little leaf, Cercospora leaf spot, two-spotted spider mite and Tobacco Mosaic Virus (TMV)) and healthy plants in Solanum melongena (brinjal in Asia, eggplant in the USA and aubergine in the UK), with images acquired from smartphones.
    Area of study: Images were acquired from fields located at Alangudi (Pudukkottai district), Tirumalaisamudram and Pillayarpatti (Thanjavur district), Tamil Nadu, India.
    Material and methods: Most earlier studies were carried out with images of isolated leaf samples, whereas in this work images of the whole or part of the plant were used to create the dataset. Augmentation techniques were applied to the manually segmented images to increase the dataset size. The classification capability of the deep learning models was analysed before and after augmentation. A fully connected layer was added to the architecture and evaluated for its performance.
    Main results: The modified architecture of VGG16 trained with the augmented dataset resulted in an average validation accuracy of 96.7%. Although it achieved the best validation accuracy, all the models were tested with sample images from the field, and the modified VGG16 achieved an accuracy of 93.33%.
    Research highlights: The findings provide guidance on factors to be considered in future research on dataset creation and methodology for efficient prediction using deep learning models.
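    A minimal sketch of the kind of pipeline described above (not the authors' exact architecture or augmentation settings; the specific transforms and the six-way head are assumptions for illustration), adapting a pre-trained VGG16 and augmenting the field images with torchvision transforms:

    import torch.nn as nn
    from torchvision import models, transforms

    # Illustrative augmentation pipeline for the manually segmented plant images.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(15),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Load VGG16 pretrained on ImageNet and replace its final fully connected layer
    # so that the network outputs six scores (five diseases plus healthy).
    vgg = models.vgg16(weights="IMAGENET1K_V1")
    vgg.classifier[6] = nn.Linear(4096, 6)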